Statistical Piano Reduction Controlling Performance Difficulty
We present a statistical-modelling method for piano reduction, i.e.
converting an ensemble score into a piano score, that can control performance
difficulty. While previous studies have focused on describing the conditions for
a piano score to be playable, playability depends on the player's skill and can
change continuously with the tempo. We therefore computationally quantify
performance difficulty as well as musical fidelity to the original score, and
formulate the problem as optimization of musical fidelity under constraints on
difficulty values. First,
performance difficulty measures are developed by means of probabilistic
generative models for piano scores, and their relation to the rate of performance
errors is studied. Second, to describe musical fidelity, we construct a
probabilistic model integrating a prior piano-score model and a model
representing how ensemble scores are likely to be edited. An iterative
optimization algorithm for piano reduction is developed based on statistical
inference of the model. We confirm the effect of the iterative procedure; we
find that subjective difficulty and musical fidelity monotonically increase
with controlled difficulty values; and we show that incorporating sequential
dependence of pitches and fingering motion in the piano-score model improves
the quality of reduction scores in high-difficulty cases.
Comment: 12 pages, 7 figures, version accepted to APSIPA Transactions on
Signal and Information Processing.
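As a toy illustration of the kind of statistical difficulty measure described in this abstract (the model below is our own simplification, not the authors' generative models), one can score a passage by the average negative log-likelihood of its note transitions under a smoothed pitch-bigram model, so that less predictable passages count as harder:

```python
import math
from collections import Counter

def train_bigram(pitches, alpha=1.0, vocab=128):
    """Add-alpha smoothed pitch-bigram model: log P(next | prev) over MIDI pitches."""
    pair_counts = Counter(zip(pitches, pitches[1:]))
    prev_counts = Counter(pitches[:-1])

    def logp(prev, nxt):
        num = pair_counts[(prev, nxt)] + alpha
        den = prev_counts[prev] + alpha * vocab
        return math.log(num / den)

    return logp

def difficulty(pitches, logp):
    """Average negative log-likelihood per note transition (nats); higher = harder."""
    nll = -sum(logp(a, b) for a, b in zip(pitches, pitches[1:]))
    return nll / (len(pitches) - 1)

# A scale-like "training corpus" of stepwise motion (hypothetical data).
corpus = [60, 62, 64, 65, 67, 69, 71, 72] * 20
logp = train_bigram(corpus)

smooth = difficulty([60, 62, 64, 65, 67], logp)  # stepwise motion seen in the corpus
jumpy = difficulty([60, 71, 61, 70, 62], logp)   # wide leaps unseen in the corpus
```

Under this sketch, the leaping passage receives a higher difficulty value than the stepwise one; the paper's actual measures additionally model fingering motion and relate the scores to empirical performance-error rates.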
Automatic Chord Estimation Based on a Frame-wise Convolutional Recurrent Neural Network with Non-Aligned Annotations
This paper describes a weakly supervised approach to the Automatic Chord Estimation (ACE) task, which aims to estimate a sequence of chords from a given music audio signal at the frame level, under the realistic condition that only non-aligned chord annotations are available. Conventional studies assuming the availability of time-aligned chord annotations have shown that Deep Neural Networks (DNNs) learning frame-wise mappings from acoustic features to chords attain excellent performance. The major drawback of such frame-wise models is that they cannot be trained without time-alignment information. Inspired by a common approach in automatic speech recognition based on non-aligned speech transcriptions, we propose a two-step method that first trains a Hidden Markov Model (HMM) for forced alignment between chord annotations and music signals, and then trains a powerful frame-wise DNN model for ACE. Experimental results show that although the frame-level accuracy of the forced alignment was just under 90%, the performance of the proposed method degraded only slightly from that of a DNN model trained on ground-truth alignment data. Furthermore, given a sufficient amount of easily collected non-aligned data, the proposed method can match or even outperform conventional methods based on ground-truth time-aligned annotations.
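The forced-alignment step can be sketched as a monotonic dynamic program: given per-frame scores for each annotated symbol, find the segmentation that assigns frames to the label sequence in order. This is a minimal stand-in for the paper's HMM (the scores, labels, and function below are our illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def forced_align(frame_logp, labels):
    """
    Monotonic forced alignment by dynamic programming (Viterbi-style).
    frame_logp: (T, V) array of per-frame log-scores for each of V symbols.
    labels: non-aligned label sequence of length N <= T (indices into [0, V)).
    Returns a length-T list mapping each frame to a position in `labels`.
    """
    T, N = frame_logp.shape[0], len(labels)
    emit = frame_logp[:, labels]               # (T, N): score of label j at frame t
    dp = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    dp[0, 0] = emit[0, 0]
    for t in range(1, T):
        for j in range(N):
            stay = dp[t - 1, j]                          # remain on the same label
            move = dp[t - 1, j - 1] if j > 0 else -np.inf  # advance to the next label
            if move > stay:
                dp[t, j], back[t, j] = move + emit[t, j], j - 1
            else:
                dp[t, j], back[t, j] = stay + emit[t, j], j
    # Backtrace from the final label at the final frame.
    path, j = [N - 1], N - 1
    for t in range(T - 1, 0, -1):
        j = back[t, j]
        path.append(j)
    return path[::-1]

# Toy example: 6 frames, 3 annotated chord symbols (hypothetical scores).
probs = np.array([[.8, .1, .1], [.8, .1, .1],
                  [.1, .8, .1], [.1, .8, .1],
                  [.1, .1, .8], [.1, .1, .8]])
align = forced_align(np.log(probs), [0, 1, 2])
```

Once such an alignment is computed, every frame has a chord label, and an ordinary frame-wise classifier can be trained exactly as in the fully supervised setting.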
Rhythm Transcription of Polyphonic MIDI Performances Based on a Merged-output HMM for Multiple Voices
(Abstract to follow)
A Deep Generative Model of Speech Complex Spectrograms
This paper proposes an approach to the joint modeling of the short-time
Fourier transform magnitude and phase spectrograms with a deep generative
model. We assume that the magnitude follows a Gaussian distribution and the
phase follows a von Mises distribution. To improve the consistency of the phase
values in the time-frequency domain, we also apply the von Mises distribution
to the phase derivatives, i.e., the group delay and the instantaneous
frequency. Based on these assumptions, we explore and compare several
combinations of loss functions for training our models. Built upon the
variational autoencoder framework, our model consists of three convolutional
neural networks acting as an encoder, a magnitude decoder, and a phase decoder.
In addition to the latent variables, we propose to also condition the phase
estimation on the estimated magnitude. Evaluated for a time-domain speech
reconstruction task, our models could generate speech with high perceptual
quality and high intelligibility.
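The von Mises assumption on phase leads to a simple periodic loss. The following is a minimal sketch of such a negative log-likelihood in NumPy (the function name and fixed concentration are our assumptions; the paper's model predicts these with convolutional decoders):

```python
import numpy as np

def von_mises_nll(phase, mu, kappa):
    """
    Mean negative log-likelihood of observed phases under a von Mises
    distribution with mean mu and concentration kappa (all in radians).
    log p(x) = kappa * cos(x - mu) - log(2 * pi * I0(kappa)),
    where I0 is the modified Bessel function of the first kind, order 0.
    """
    logp = kappa * np.cos(phase - mu) - np.log(2 * np.pi * np.i0(kappa))
    return -np.mean(logp)

# Hypothetical observed phases clustered near 0 radians.
phase = np.array([0.10, -0.20, 0.05])
loss_centered = von_mises_nll(phase, 0.0, 2.0)   # prediction matches the data
loss_opposite = von_mises_nll(phase, np.pi, 2.0)  # prediction is half a cycle off
```

A key property, and the reason for using a wrapped distribution rather than squared error on raw phase, is periodicity: predicting mu and mu + 2π yields exactly the same loss.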